Penalty Functions for Genetic Programming Algorithms
Authors
Abstract
Very often, symbolic regression as addressed in Genetic Programming (GP) amounts to approximate interpolation: GP algorithms try to fit the training sample as well as possible, with no notion of generalization error taken into account. As a consequence, overfitting, code bloat and noisy data are problems that are not satisfactorily solved under this approach. Motivated by this situation, we revisit the problem of symbolic regression from the perspective of Machine Learning, a well-founded mathematical toolbox for predictive learning. We perform empirical comparisons between classical statistical methods (AIC and BIC) and methods based on Vapnik-Chervonenkis (VC) theory for regression problems under genetic training. Empirical comparisons of the different methods suggest practical advantages of VC-based model selection. We conclude that VC theory provides a methodological framework for complexity control in Genetic Programming, even when its technical results do not seem to be directly applicable. As the main practical advantage, precise penalty functions founded on the notion of generalization error are proposed for evolving GP trees.
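To make the kinds of penalties being compared concrete, the Python sketch below shows how AIC-, BIC- and VC-style penalties can be turned into a selection fitness for a GP tree. It uses the textbook AIC/BIC formulas and the practical VC penalization factor popularized by Cherkassky and Mulier; using tree size as the complexity estimate, and names such as penalized_fitness, are illustrative assumptions rather than the paper's exact formulation.

```python
import math

def aic(rss, n, k):
    """Akaike Information Criterion: n*ln(RSS/n) + 2k."""
    return n * math.log(rss / n) + 2 * k

def bic(rss, n, k):
    """Bayesian Information Criterion: stronger complexity term k*ln(n)."""
    return n * math.log(rss / n) + k * math.log(n)

def vc_penalty_factor(n, h):
    """Practical VC-based penalization factor (Cherkassky/Mulier style)
    that multiplies the empirical risk. h is an estimate of the model's
    VC dimension; taking it from the GP tree size is an assumption here."""
    p = h / n
    inner = 1.0 - math.sqrt(p - p * math.log(p) + math.log(n) / (2.0 * n))
    if inner <= 0.0:
        return float("inf")  # bound is vacuous: model far too complex
    return 1.0 / inner

def penalized_fitness(rss, n, tree_size, method="vc"):
    """Selection fitness for a GP tree (lower is better). rss is the
    residual sum of squares on the n training samples; tree_size (number
    of nodes) serves as a rough complexity proxy."""
    if method == "aic":
        return aic(rss, n, tree_size)
    if method == "bic":
        return bic(rss, n, tree_size)
    return (rss / n) * vc_penalty_factor(n, tree_size)
```

In a GP loop, a penalized fitness of this form would simply replace the raw training error as the quantity being minimized, so that larger trees must justify their extra complexity with a proportionally larger reduction in empirical error.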
Similar Articles
THE EFFECTS OF INITIAL SAMPLING AND PENALTY FUNCTIONS IN OPTIMAL DESIGN OF TRUSSES USING METAHEURISTIC ALGORITHMS
Although the Genetic Algorithm (GA), Ant Colony (AC) and Particle Swarm Optimization (PSO) algorithms have already been extended to various types of engineering problems, the effect of initial sampling, alongside constraints, on the efficiency of these algorithms is still an interesting field. In this paper we show that initial sampling with a special series of constraints plays an important role in the conv...
Minimizing a General Penalty Function on a Single Machine via Developing Approximation Algorithms and FPTASs
This paper addresses Tardy/Lost penalty minimization on a single machine. According to this penalty criterion, if the tardiness of a job exceeds a predefined value, the job is lost and penalized by a fixed value. Besides its application to real-world problems, the Tardy/Lost measure is a general form of popular objective functions such as weighted tardiness, late work and tardiness with reje...
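A minimal sketch of how such a Tardy/Lost criterion can be evaluated for a single job is given below; the parameter names (due_date, max_tardiness, lost_penalty) are illustrative assumptions, since the excerpt does not show that paper's notation.

```python
def tardy_lost_penalty(completion_time, due_date, weight, max_tardiness, lost_penalty):
    """Tardy/Lost cost of one job: weighted tardiness while the delay stays
    within max_tardiness; beyond that the job is 'lost' and charged a fixed
    penalty instead (all names here are illustrative)."""
    tardiness = max(0, completion_time - due_date)
    if tardiness > max_tardiness:
        return lost_penalty        # job lost: fixed charge
    return weight * tardiness      # job merely tardy: linear charge
```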
Integrating Goal Programming, Taylor Series, Kuhn-Tucker Conditions, and Penalty Function Approaches to Solve Linear Fractional Bi-level Programming Problems
In this paper, we integrate goal programming (GP), Taylor Series, Kuhn-Tucker conditions and penalty function approaches to solve linear fractional bi-level programming (LFBLP) problems. As is well known, the Taylor Series has the property of transforming fractional functions into polynomials. In the present article, by means of the Taylor Series we obtain polynomial objective functions which are equivalent...
Evolutionary algorithms approach to the solution of mixed integer non-linear programming problems
The global optimization of mixed integer non-linear programming (MINLP) problems constitutes a major area of research in many engineering applications. In this work, a comparison is made between an algorithm based on Simulated Annealing (M-SIMPSA) and two Evolutionary Algorithms: Genetic Algorithms (GAs) and Evolution Strategies (ESs). Results concerning the handling of constraints, through penalty functi...
Solving a generalized aggregate production planning problem by genetic algorithms
This paper presents a genetic algorithm (GA) for solving a generalized model of single-item resource-constrained aggregate production planning (APP) with linear cost functions. APP belongs to a class of production planning problems in which there is a single production variable representing the total production of all products. We linearize a linear mixed-integer model of APP subject to hiring...